
    The influence of scene context on object recognition is independent of attentional focus

    Humans can quickly and accurately recognize objects within briefly presented natural scenes. Previous work has provided evidence that scene context contributes to this process, demonstrating improved naming of objects presented in semantically consistent scenes (e.g., a sandcastle on a beach) relative to semantically inconsistent scenes (e.g., a sandcastle on a football field). The current study investigated which processes underlie this scene consistency effect. Specifically, we tested: (1) whether the effect is due to increased visual feature and/or shape overlap for consistent relative to inconsistent scene-object pairs; and (2) whether the effect is mediated by attention to the background scene. Experiment 1 replicated the scene consistency effect of a previous report (Davenport and Potter, 2004). Using a new, carefully controlled stimulus set, Experiment 2 showed that the scene consistency effect could not be explained by low-level feature or shape overlap between scenes and target objects. Experiments 3a and 3b investigated whether focused attention modulates the scene consistency effect. Using a location cueing manipulation, participants were correctly informed about the location of the target object on a proportion of trials, allowing focused attention to be deployed toward the target object. Importantly, the effect of scene consistency on target object recognition was independent of spatial attention, and was observed both when attention was focused on the target object and when attention was focused on the background scene. These results indicate that a semantically consistent scene context benefits object recognition independently of the focus of attention. We suggest that the scene consistency effect is primarily driven by global scene properties, or "scene gist", that can be processed with minimal attentional resources.

    Contorted and ordinary body postures in the human brain

    Social interaction and comprehension of non-verbal behaviour require a representation of people’s bodies. Research into the neural underpinnings of body representation implicates several brain regions, including the extrastriate and fusiform body areas (EBA and FBA), superior temporal sulcus (STS), inferior frontal gyrus (IFG), and inferior parietal lobule (IPL). The different roles played by these regions in parsing familiar and unfamiliar body postures remain unclear. We examined the responses of this body observation network to static images of ordinary and contorted postures using a repetition suppression design in functional neuroimaging. Participants were scanned whilst observing static images of a contortionist or a group of objects in either ordinary or unusual configurations, presented from different viewpoints. Greater activity emerged in EBA and FBA when participants viewed contorted compared to ordinary body postures. Repeated presentation of the same posture from different viewpoints led to suppressed responses in the fusiform gyrus as well as in three regions that are characteristically activated by observing moving bodies, namely STS, IFG, and IPL. These four regions did not distinguish the image viewpoint or the plausibility of the posture. Together, these data define a broad cortical network for processing static body postures, including regions classically associated with action observation.

    Representation of body identity and body actions in extrastriate body area and ventral premotor cortex

    Although inherently linked, body form and body action may be represented in separate neural substrates. Using repetitive transcranial magnetic stimulation in healthy individuals, we show that interference with the extrastriate body area impairs the discrimination of bodily forms, and interference with the ventral premotor cortex impairs the discrimination of bodily actions. This double dissociation suggests that whereas the extrastriate body area mainly processes actors' body identity, the premotor cortex is crucial for the visual discrimination of actions.

    Multivoxel Pattern Analysis Reveals Auditory Motion Information in MT+ of Both Congenitally Blind and Sighted Individuals

    Cross-modal plasticity refers to the recruitment of cortical regions involved in processing one modality (e.g. vision) for processing other modalities (e.g. audition). The principles determining how and where cross-modal plasticity occurs remain poorly understood. Here, we investigate these principles by testing responses to auditory motion in visual motion area MT+ of congenitally blind and sighted individuals. Replicating previous reports, we find that MT+ as a whole shows a strong and selective response to auditory motion in congenitally blind but not sighted individuals, suggesting that the emergence of this univariate response depends on experience. Importantly, however, multivoxel pattern analyses showed that MT+ contained information about different auditory motion conditions in both blind and sighted individuals. These results were specific to MT+ and were not found in early visual cortex. Basic sensitivity to auditory motion in MT+ is thus experience-independent, which may be a basis for the region's strong cross-modal recruitment in congenital blindness.
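
    The core logic of a multivoxel pattern analysis like the one described above can be shown with a short decoding sketch. The snippet below is a minimal, generic illustration, not the authors' pipeline: it uses synthetic voxel patterns for a hypothetical region of interest and asks whether a cross-validated linear classifier can tell two auditory motion conditions apart above chance, the multivariate test that can succeed even when the region shows no overall (univariate) response difference.

```python
# Minimal MVPA-style decoding sketch (synthetic data, hypothetical ROI).
# A real analysis would use preprocessed per-trial fMRI response patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_trials, n_voxels = 80, 200                 # hypothetical ROI size (e.g., MT+)
labels = np.repeat([0, 1], n_trials // 2)    # two auditory motion conditions

# Synthetic voxel patterns: a weak, distributed signal distinguishes the
# conditions, while the mean response across all voxels stays nearly equal.
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5

# Cross-validated decoding: accuracy reliably above 0.50 indicates that the
# region carries condition information in its spatial activity pattern.
accuracy = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```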

    Neural correlates of enhanced visual short-term memory for angry faces: An fMRI study

    Background: Fluid and effective social communication requires that both face identity and emotional expression information are encoded and maintained in visual short-term memory (VSTM) to enable a coherent, ongoing picture of the world and its players. This appears to be of particular evolutionary importance when confronted with potentially threatening displays of emotion: previous research has shown better VSTM for angry versus happy or neutral face identities. Methodology/Principal Findings: Using functional magnetic resonance imaging, here we investigated the neural correlates of this angry face benefit in VSTM. Participants were shown between one and four to-be-remembered angry, happy, or neutral faces, and after a short retention delay they stated whether a single probe face had been present or not in the previous display. All faces in any one display expressed the same emotion, and the task required memory for face identity. We find enhanced VSTM for angry face identities and describe the right hemisphere brain network underpinning this effect, which involves the globus pallidus, superior temporal sulcus, and frontal lobe. Increased activity in the globus pallidus was significantly correlated with the angry benefit in VSTM. Areas modulated by emotion were distinct from those modulated by memory load. Conclusions/Significance: Our results provide evidence for a key role of the basal ganglia as an interface between emotion and cognition, supported by a frontal, temporal, and occipital network. The authors were supported by a Wellcome Trust grant (number 077185/Z/05/Z) and by a BBSRC (UK) grant (BBS/B/16178).

    Adults' Awareness of Faces Follows Newborns' Looking Preferences

    From the first days of life, humans preferentially orient towards upright faces, likely reflecting innate subcortical mechanisms. Here, we show that binocular rivalry can reveal face detection mechanisms in adults that are surprisingly similar to these inborn face detection mechanisms. We used continuous flash suppression (CFS), a variant of binocular rivalry, to render stimuli invisible at the beginning of each trial, and measured the time upright and inverted stimuli needed to overcome such interocular suppression. Critically, specific stimulus properties previously shown to modulate looking preferences in neonates similarly modulated adults' awareness of faces presented during CFS. First, the advantage of upright faces in overcoming CFS was strongly modulated by contrast polarity and direction of illumination. Second, schematic patterns consisting of three dark blobs were suppressed for shorter durations when the arrangement of these blobs respected the face-like configuration of the eyes and the mouth, and this effect was modulated by contrast polarity. No such effects were obtained in a binocular control experiment not involving CFS, suggesting a crucial role for face-sensitive mechanisms operating outside of conscious awareness. These findings indicate that visual awareness of faces in adults is governed by perceptual mechanisms that are sensitive to similar stimulus properties as those modulating newborns' face preferences.

    My Hand or Yours? Markedly Different Sensitivity to Egocentric and Allocentric Views in the Hand Laterality Task

    In the hand laterality task, participants judge the handedness of visually presented stimuli (images of hands shown in a variety of postures and views) and indicate whether they perceive a right or left hand. The task engages kinaesthetic and sensorimotor processes and is considered a standard example of motor imagery. However, in this study we find that while motor imagery holds across egocentric views of the stimuli (where the hands are likely to be one's own), it does not appear to hold across allocentric views (where the hands are likely to be another person's). First, we find that psychophysical sensitivity, d', is clearly demarcated between egocentric and allocentric views, being high for the former and low for the latter. Second, using mixed effects methods to analyse the chronometric data, we find a high positive correlation between response times across egocentric views, suggesting a common use of motor imagery across these views. Correlations are, however, considerably lower between egocentric and allocentric views, suggesting a switch away from motor imagery across these perspectives. We relate these findings to research showing that the extrastriate body area discriminates egocentric (‘self’) and allocentric (‘other’) views of the human body and of body parts, including hands.
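
    The sensitivity measure d' used here comes from standard signal detection theory: d' = z(hit rate) - z(false-alarm rate). The sketch below illustrates how such a value is computed; the trial counts are hypothetical, invented for illustration, and are not the study's data.

```python
# Signal detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
# Trial counts below are invented for illustration only.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates away from exactly 0 or 1,
    # which would make the z-transform infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical "right hand" judgments: high sensitivity for egocentric
# views, much lower sensitivity for allocentric views.
print(f"egocentric d':  {d_prime(45, 5, 8, 42):.2f}")
print(f"allocentric d': {d_prime(30, 20, 18, 32):.2f}")
```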

    A Hierarchical Probabilistic Model for Rapid Object Categorization in Natural Scenes

    Humans can categorize objects in complex natural scenes within 100–150 ms. This remarkable capacity for rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional feature space (e.g., ∼6,000 dimensions in a leading model) and often categorize objects in natural scenes by categorizing the context that co-occurs with the objects when the objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization.
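
    To make the scale concrete: a linear decision boundary in a feature space of the size mentioned above has one weight per feature, so a roughly 6,000-dimensional model must fit thousands of parameters from labeled examples. The sketch below is purely illustrative, with random stand-in features and labels rather than any published model's representation; the point is the parameter count, not the accuracy.

```python
# Scale illustration: a linear decision boundary in a ~6,000-dimensional
# feature space. Features and labels are random placeholders, so the
# learned accuracy is meaningless; only the parameter count matters here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_images, n_features = 500, 6000
X = rng.standard_normal((n_images, n_features))   # stand-in image features
y = rng.integers(0, 2, n_images)                  # object present / absent

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"decision boundary has {clf.coef_.size} weights (+1 bias term)")
```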